
chore(chart-deps): update cloudnative-pg to version 0.28.1#3233

Merged
merll merged 3 commits into main from ci-update-cloudnative-pg-to-0.28.1 on May 12, 2026

Conversation

@svcAPLBot
Contributor

This PR updates the dependency cloudnative-pg to version 0.28.1.

@svcAPLBot svcAPLBot added the chart-deps Auto generated helm chart dependencies label May 12, 2026
@merll merll marked this pull request as ready for review May 12, 2026 07:42
@svcAPLBot
Contributor Author

Comparison of Helm chart templating output:

# cloudnative-pg/templates/config.yaml

# cloudnative-pg/templates/crds/crds.yaml

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.securityContext.properties.procMount.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/clusters.postgresql.cnpg.io
! ± value change in multiline text (no inserts, one deletion)
  procMount denotes the type of proc mount to use for the containers.
  The default value is Default which uses the container runtime defaults for
  readonly paths and masked paths.
- This requires the ProcMountType feature flag to be enabled.
  Note that this field cannot be set when spec.os.name is windows.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! - one map entry removed:
- workloadRef:
-   type: object
-   description: |
-     WorkloadRef provides a reference to the Workload object that this Pod belongs to.
-     This field is used by the scheduler to identify the PodGroup and apply the
-     correct group scheduling policies. The Workload object referenced
-     by this field may not exist at the time the Pod is created.
-     This field is immutable, but a Workload object with the same name
-     may be recreated with different policies. Doing this during pod scheduling
-     may result in the placement not conforming to the expected policies.
-   required:
-   - name
-   - podGroup
-   properties:
-     name:
-       type: string
-       description: |
-         Name defines the name of the Workload object this Pod belongs to.
-         Workload must be in the same namespace as the Pod.
-         If it doesn't match any existing Workload, the Pod will remain unschedulable
-         until a Workload object is created and observed by the kube-scheduler.
-         It must be a DNS subdomain.
-     podGroup:
-       type: string
-       description: |
-         PodGroup is the name of the PodGroup within the Workload that this Pod
-         belongs to. If it doesn't match any existing PodGroup within the Workload,
-         the Pod will remain unschedulable until the Workload object is recreated
-         and observed by the kube-scheduler. It must be a DNS label.
-     podGroupReplicaKey:
-       type: string
-       description: |
-         PodGroupReplicaKey specifies the replica key of the PodGroup to which this
-         Pod belongs. It is used to distinguish pods belonging to different replicas
-         of the same pod group. The pod group policy is applied separately to each replica.
-         When set, it must be a DNS label.
! + one map entry added:
+ schedulingGroup:
+   type: object
+   description: |
+     SchedulingGroup provides a reference to the immediate scheduling runtime
+     grouping object that this Pod belongs to.
+     This field is used by the scheduler to identify the group and apply the
+     correct group scheduling policies. The association with a group also
+     impacts other lifecycle aspects of a Pod that are relevant in a wider context
+     of scheduling like preemption, resource attachment, etc. If not specified,
+     the Pod is treated as a single unit in all of these aspects.
+     The group object referenced by this field may not exist at the time the
+     Pod is created.
+     This field is immutable, but a group object with the same name may be
+     recreated with different policies. Doing this during pod scheduling
+     may result in the placement not conforming to the expected policies.
+   properties:
+     podGroupName:
+       type: string
+       description: |
+         PodGroupName specifies the name of the standalone PodGroup object
+         that represents the runtime instance of this group.
+         Must be a DNS subdomain.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.containers.items.properties.securityContext.properties.procMount.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (no inserts, one deletion)
  procMount denotes the type of proc mount to use for the containers.
  The default value is Default which uses the container runtime defaults for
  readonly paths and masked paths.
- This requires the ProcMountType feature flag to be enabled.
  Note that this field cannot be set when spec.os.name is windows.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.ephemeralContainers.items.properties.securityContext.properties.procMount.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (no inserts, one deletion)
  procMount denotes the type of proc mount to use for the containers.
  The default value is Default which uses the container runtime defaults for
  readonly paths and masked paths.
- This requires the ProcMountType feature flag to be enabled.
  Note that this field cannot be set when spec.os.name is windows.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.hostUsers.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (one insert, one deletion)
  Use the host's user namespace.
  Optional: Default to true.
  If set to true or not present, the pod will be run in the host user namespace, useful
  for when the pod needs a feature only available to the host user namespace, such as
  loading a kernel module with CAP_SYS_MODULE.
  When set to false, a new userns is created for the pod. Setting false is useful for
  mitigating container breakout vulnerabilities even allowing users to run their
- containers as root without actually having root privileges on the host.
- This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.
+ containers as root without actually having root privileges on the host.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.securityContext.properties.procMount.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (no inserts, one deletion)
  procMount denotes the type of proc mount to use for the containers.
  The default value is Default which uses the container runtime defaults for
  readonly paths and masked paths.
- This requires the ProcMountType feature flag to be enabled.
  Note that this field cannot be set when spec.os.name is windows.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resourceClaims.items.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (one insert, one deletion)
  PodResourceClaim references exactly one ResourceClaim, either directly
  or by naming a ResourceClaimTemplate which is then turned into a ResourceClaim
  for the pod.
  
  It adds a name to it that uniquely identifies the ResourceClaim inside the Pod.
- Containers that need access to the ResourceClaim reference it with this name.
+ Containers that need access to the ResourceClaim reference it with this name.
+ 
+ When the DRAWorkloadResourceClaims feature gate is enabled and this Pod
+ belongs to a PodGroup, a PodResourceClaim is matched to a
+ PodGroupResourceClaim if all of their fields are equal (Name,
+ ResourceClaimName, and ResourceClaimTemplateName). A matched claim references
+ a single ResourceClaim shared across all Pods in the PodGroup, reserved for
+ the PodGroup in ResourceClaimStatus.ReservedFor rather than for individual
+ Pods.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.resourceClaims.items.properties.resourceClaimTemplateName.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (one insert, no deletions)
  ResourceClaimTemplateName is the name of a ResourceClaimTemplate
  object in the same namespace as this pod.
  
  The template will be used to create a new ResourceClaim, which will
  
  [one line unchanged]
  
  will also be deleted. The pod name and resource name, along with a
  generated component, will be used to form a unique name for the
  ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses.
  
+ When the DRAWorkloadResourceClaims feature gate is enabled and the pod
+ belongs to a PodGroup that defines a PodGroupResourceClaim with the same
+ Name and ResourceClaimTemplateName, this PodResourceClaim resolves to the
+ ResourceClaim generated for the PodGroup. All pods in the group that
+ define an equivalent PodResourceClaim matching the
+ PodGroupResourceClaim's Name and ResourceClaimTemplateName share the same
+ generated ResourceClaim. ResourceClaims generated for a PodGroup are
+ owned by the PodGroup and their lifecycles are tied to the PodGroup
+ instead of any individual pod.
+ 
  This field is immutable and no changes will be made to the
  corresponding ResourceClaim by the control plane after creating the
  ResourceClaim.
  
  Exactly one of ResourceClaimName and ResourceClaimTemplateName must
  be set.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.image.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (one insert, one deletion)
  image represents an OCI object (a container image or artifact) pulled and mounted on the kubelet's host machine.
  The volume is resolved at pod startup depending on which PullPolicy value is provided:
  
  - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails.
  
  [three lines unchanged]
  
  The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation.
  A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message.
  The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field.
  The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images.
- The volume will be mounted read-only (ro) and non-executable files (noexec).
+ The volume will be mounted read-only (ro).
  Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33.
  The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.

@@ spec.versions.v1.schema.openAPIV3Schema.properties.spec.properties.template.properties.spec.properties.volumes.items.properties.portworxVolume.description @@
# apiextensions.k8s.io/v1/CustomResourceDefinition/poolers.postgresql.cnpg.io
! ± value change in multiline text (one insert, one deletion)
  portworxVolume represents a portworx volume attached and mounted on kubelets host machine.
  Deprecated: PortworxVolume is deprecated. All operations for the in-tree portworxVolume type
- are redirected to the pxd.portworx.com CSI driver when the CSIMigrationPortworx feature-gate
- is on.
+ are redirected to the pxd.portworx.com CSI driver.

# cloudnative-pg/templates/deployment.yaml

@@ spec.template.spec.containers.manager.env.OPERATOR_IMAGE_NAME.value @@
! ± value change
- ghcr.io/cloudnative-pg/cloudnative-pg:1.29.0
+ ghcr.io/cloudnative-pg/cloudnative-pg:1.29.1

@@ spec.template.spec.containers.manager.image @@
! ± value change
- ghcr.io/cloudnative-pg/cloudnative-pg:1.29.0
+ ghcr.io/cloudnative-pg/cloudnative-pg:1.29.1

# cloudnative-pg/templates/monitoring-configmap.yaml

@@ data.queries @@
! ± value change in multiline text (13 inserts, 13 deletions)
  backends:
    query: |
      SELECT sa.datname
        , sa.usename
  
  [twelve lines unchanged]
  
        SELECT datname
          , state
          , usename
          , COALESCE(application_name, '') AS application_name
-         , COUNT(*)
-         , COALESCE(EXTRACT (EPOCH FROM (max(now() - xact_start))), 0) AS max_tx_secs
+         , pg_catalog.count(*)
+         , COALESCE(EXTRACT (EPOCH FROM (pg_catalog.max(pg_catalog.now() OPERATOR(pg_catalog.-) xact_start))), 0) AS max_tx_secs
        FROM pg_catalog.pg_stat_activity
        GROUP BY datname, state, usename, application_name
-     ) sa ON states.state = sa.state
+     ) sa ON states.state OPERATOR(pg_catalog.=) sa.state
      WHERE sa.usename IS NOT NULL
    metrics:
      - datname:
          usage: "LABEL"
  
  [15 lines unchanged]
  
          description: "Maximum duration of a transaction in seconds"
  
  backends_waiting:
    query: |
-     SELECT count(*) AS total
+     SELECT pg_catalog.count(*) AS total
      FROM pg_catalog.pg_locks blocked_locks
      JOIN pg_catalog.pg_locks blocking_locks
-       ON blocking_locks.locktype = blocked_locks.locktype
+       ON blocking_locks.locktype OPERATOR(pg_catalog.=) blocked_locks.locktype
        AND blocking_locks.database IS NOT DISTINCT FROM blocked_locks.database
        AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
        AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
        AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
  
  [one line unchanged]
  
        AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
        AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
        AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
        AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
-       AND blocking_locks.pid != blocked_locks.pid
-     JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
+       AND blocking_locks.pid OPERATOR(pg_catalog.<>) blocked_locks.pid
+     JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid OPERATOR(pg_catalog.=) blocking_locks.pid
      WHERE NOT blocked_locks.granted
    metrics:
      - total:
          usage: "GAUGE"
  
  [33 lines unchanged]
  
  pg_replication:
    query: |
      SELECT CASE WHEN (
          NOT pg_catalog.pg_is_in_recovery()
-         OR pg_catalog.pg_last_wal_receive_lsn() = pg_catalog.pg_last_wal_replay_lsn())
+         OR pg_catalog.pg_last_wal_receive_lsn() OPERATOR(pg_catalog.=) pg_catalog.pg_last_wal_replay_lsn())
        THEN 0
        ELSE GREATEST (0,
-         EXTRACT(EPOCH FROM (now() - pg_catalog.pg_last_xact_replay_timestamp())))
+         EXTRACT(EPOCH FROM (pg_catalog.now() OPERATOR(pg_catalog.-) pg_catalog.pg_last_xact_replay_timestamp())))
        END AS lag,
        pg_catalog.pg_is_in_recovery() AS in_recovery,
-       EXISTS (TABLE pg_stat_wal_receiver) AS is_wal_receiver_up,
-       (SELECT count(*) FROM pg_catalog.pg_stat_replication) AS streaming_replicas
+       EXISTS (TABLE pg_catalog.pg_stat_wal_receiver) AS is_wal_receiver_up,
+       (SELECT pg_catalog.count(*) FROM pg_catalog.pg_stat_replication) AS streaming_replicas
    metrics:
      - lag:
          usage: "GAUGE"
          description: "Replication lag behind primary in seconds"
  
  [39 lines unchanged]
  
  pg_stat_archiver:
    query: |
      SELECT archived_count
        , failed_count
-       , COALESCE(EXTRACT(EPOCH FROM (now() - last_archived_time)), -1) AS seconds_since_last_archival
-       , COALESCE(EXTRACT(EPOCH FROM (now() - last_failed_time)), -1) AS seconds_since_last_failure
+       , COALESCE(EXTRACT(EPOCH FROM (pg_catalog.now() OPERATOR(pg_catalog.-) last_archived_time)), -1) AS seconds_since_last_archival
+       , COALESCE(EXTRACT(EPOCH FROM (pg_catalog.now() OPERATOR(pg_catalog.-) last_failed_time)), -1) AS seconds_since_last_failure
        , COALESCE(EXTRACT(EPOCH FROM last_archived_time), -1) AS last_archived_time
        , COALESCE(EXTRACT(EPOCH FROM last_failed_time), -1) AS last_failed_time
-       , COALESCE(CAST(CAST('x'||pg_catalog.right(pg_catalog.split_part(last_archived_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_archived_wal_start_lsn
-       , COALESCE(CAST(CAST('x'||pg_catalog.right(pg_catalog.split_part(last_failed_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_failed_wal_start_lsn
+       , COALESCE(CAST(CAST('x' OPERATOR(pg_catalog.||) pg_catalog.right(pg_catalog.split_part(last_archived_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_archived_wal_start_lsn
+       , COALESCE(CAST(CAST('x' OPERATOR(pg_catalog.||) pg_catalog.right(pg_catalog.split_part(last_failed_wal, '.', 1), 16) AS pg_catalog.bit(64)) AS pg_catalog.int8), -1) AS last_failed_wal_start_lsn
        , EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time
      FROM pg_catalog.pg_stat_archiver
    predicate_query: |
      SELECT NOT pg_catalog.pg_is_in_recovery()
-       OR pg_catalog.current_setting('archive_mode') = 'always'
+       OR pg_catalog.current_setting('archive_mode') OPERATOR(pg_catalog.=) 'always'
    metrics:
      - archived_count:
          usage: "COUNTER"
          description: "Number of WAL files that have been successfully archived"
  
  [277 lines unchanged]
  
  
  pg_extensions:
    query: |
      SELECT
-       current_database() as datname,
+       pg_catalog.current_database() as datname,
        name as extname,
        default_version,
        installed_version,
        CASE
-         WHEN default_version = installed_version THEN 0
+         WHEN default_version OPERATOR(pg_catalog.=) installed_version THEN 0
          ELSE 1
      END AS update_available
      FROM pg_catalog.pg_available_extensions
      WHERE installed_version IS NOT NULL
  
  [14 lines unchanged]
  
          usage: "GAUGE"
          description: "An update is available"
    target_databases:
      - '*'
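Nearly all of the monitoring query changes above follow one pattern: functions and operators are schema-qualified to pg_catalog (count(*) becomes pg_catalog.count(*), = becomes OPERATOR(pg_catalog.=)) so the queries cannot be shadowed by same-named objects earlier on the search_path. One of the less obvious expressions, the WAL-name decoding in pg_stat_archiver (CAST(CAST('x' || right(split_part(wal, '.', 1), 16) AS bit(64)) AS int8)), can be sketched in Python; wal_name_to_lsn_int is a hypothetical helper name for illustration, not part of the chart:

```python
def wal_name_to_lsn_int(wal_name: str) -> int:
    """Sketch of the SQL expression in the pg_stat_archiver metric:
    take the part of the WAL name before the first dot, keep its last
    16 hex digits (the log/segment portion, dropping the 8-digit
    timeline prefix), and parse them as a 64-bit integer.

    Note: SQL's int8 is signed, so a value with the top bit set would
    come out negative there, while Python ints are unbounded.
    """
    hex_part = wal_name.split(".", 1)[0][-16:]
    return int(hex_part, 16)


# A WAL segment name is 24 hex digits: timeline (8) + log/seg (16).
# Suffixes like ".partial" are stripped by the split on '.'.
print(wal_name_to_lsn_int("000000010000000000000002"))  # 2
```

The resulting integer is what the chart exports as last_archived_wal_start_lsn / last_failed_wal_start_lsn, with -1 substituted via COALESCE when no WAL has been archived yet.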

# cloudnative-pg/templates/mutatingwebhookconfiguration.yaml

# cloudnative-pg/templates/rbac.yaml

# cloudnative-pg/templates/service.yaml

# cloudnative-pg/templates/validatingwebhookconfiguration.yaml

# otomi-api/templates/core-config.yaml

@@ data.core.yaml @@
! ± value change in multiline text (one insert, one deletion)
  adminApps:
  - deps:
    - prometheus
    ingress:
  
  [287 lines unchanged]
  
    cnpg:
      about: CloudNative PostgreSQL is an open source operator designed to manage PostgreSQL
        workloads on any supported Kubernetes cluster running in private, public, hybrid,
        or multi-cloud environments.
-     appVersion: 1.29.0
+     appVersion: 1.29.1
      chartName: cloudnative-pg
      integration: CloudNativePG is used by App Platform to provide Postgresql database
        for various applications. In the values you can configure a storageprovider
        for backups. The backups can be enabled in settings.
  
  [434 lines unchanged]
  
      svc: tekton-dashboard
      type: public
    name: tekton
    ownHost: true

# otomi-api/templates/deployment.yaml

# rabbitmq-cluster-operator/templates/messaging-topology-operator/validating-webhook-configuration.yaml

# values-repo.yaml

@merll merll merged commit 0bc59db into main May 12, 2026
16 checks passed
@merll merll deleted the ci-update-cloudnative-pg-to-0.28.1 branch May 12, 2026 07:46